
Fix Deprecation Warnings torch.GradScaler and torch.autocast #930

Merged: 3 commits into FAIR-Chem:main on Jan 17, 2025

Conversation

@IliasChair (Contributor) commented Nov 30, 2024

This PR replaces all occurrences of `torch.cuda.amp.GradScaler(args...)` and `torch.cuda.amp.autocast(args...)` with `torch.GradScaler("cuda", args...)` and `torch.autocast("cuda", args...)`, respectively.

This fixes the deprecation warnings during training and inference like:

FutureWarning: `torch.cuda.amp.autocast(args...)` is deprecated. Please use `torch.amp.autocast("cuda", args...)` instead.
  with torch.cuda.amp.autocast(enabled=self.scaler is not None):

See the PyTorch docs.
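For illustration, a minimal before/after sketch of the migration (the model, tensor, and scaler names are placeholders rather than code from this repository, and it assumes a CUDA-capable PyTorch build):

```python
import torch

model = torch.nn.Linear(8, 8).cuda()
x = torch.randn(4, 8, device="cuda")

# Deprecated spelling (emits the FutureWarning quoted above):
# scaler = torch.cuda.amp.GradScaler()
# with torch.cuda.amp.autocast(enabled=True):
#     y = model(x)

# Replacement used in this PR:
scaler = torch.GradScaler("cuda")
with torch.autocast("cuda", enabled=True):
    y = model(x)
```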

P.S. I'm not sure about the difference between `torch.autocast("cuda", args...)` and `torch.amp.autocast("cuda", args...)` (the form shown in the deprecation warning), but I would prefer to stick to the documentation, i.e., use `torch.autocast` without the `amp` namespace.

…`torch.autocast("cuda", args...)`

- replace all occurrences of `torch.cuda.amp.GradScaler(args...)` with `torch.GradScaler("cuda", args...)`

codecov bot commented Nov 30, 2024

@rayg1234 added the enhancement (New feature or request) and patch (Patch version release) labels Dec 6, 2024
@rayg1234 (Collaborator) commented Dec 6, 2024

@IliasChair thanks for helping us clean this up! This should be fine. I would prefer to get the device instead of blanket-casting to CUDA (there are tests that rely on CPU autocast), i.e.:
with torch.autocast(x.device, enabled=False)

but if the tests pass and all the old code is using cuda.autocast anyway, it should be fine.
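A minimal sketch of the device-aware pattern suggested above; note that torch.autocast expects a device-type string, so in practice one would likely pass x.device.type rather than the torch.device object. The function and tensor names here are illustrative, not from the fairchem codebase:

```python
import torch

def matmul_without_autocast(x: torch.Tensor) -> torch.Tensor:
    # Read the device type ("cuda" or "cpu") from the input tensor instead of
    # hard-coding "cuda", so the same code path works in CPU-only tests.
    with torch.autocast(device_type=x.device.type, enabled=False):
        return x @ x.T
```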

@IliasChair (Contributor, Author) commented

Hi,
All the changes I made are in portions of the code that currently support GPU-based autocasting only. I don't see any existing code paths supporting CPU autocasting, so these changes should not introduce any issues.

If adding support for CPU autocasting is a planned feature, I believe it would still make sense to merge these changes first and then create a separate issue for that later.

As I see it, the codebase is largely tailored for GPU usage, particularly for more intensive tasks. Adding full CPU support would likely be a larger endeavour, and it may not be worth the effort since most users are going to use GPUs for anything machine-learning related anyway.

That said, if I missed anything, I'd be happy to update my PR accordingly.

Best regards,
Ilias

@rayg1234 (Collaborator) commented

Sorry, I forgot to approve this! Will merge once the tests pass!

@rayg1234 rayg1234 added this pull request to the merge queue Jan 16, 2025
@github-merge-queue github-merge-queue bot removed this pull request from the merge queue due to failed status checks Jan 16, 2025
@rayg1234 rayg1234 added this pull request to the merge queue Jan 16, 2025
Merged via the queue into FAIR-Chem:main with commit 6dcfca7 Jan 17, 2025
7 checks passed
misko pushed a commit that referenced this pull request Jan 17, 2025
…`torch.autocast("cuda", args...)` (#930)

- replace all occurrences of `torch.cuda.amp.GradScaler(args...)` with `torch.GradScaler("cuda", args...)`

Co-authored-by: iliaschair <[email protected]>
Co-authored-by: Ray <[email protected]>
Former-commit-id: 9de1edb636dff0a5ca640658ee10975e91aef7df
mshuaibii pushed a commit that referenced this pull request Jan 17, 2025
…`torch.autocast("cuda", args...)` (#930)

- replace all occurrences of `torch.cuda.amp.GradScaler(args...)` with `torch.GradScaler("cuda", args...)`

Co-authored-by: iliaschair <[email protected]>
Co-authored-by: Ray <[email protected]>
Former-commit-id: fe6bb3248d782c629e539164d5bc7f69b9ee37d0
Labels: enhancement (New feature or request), patch (Patch version release)
3 participants